Adversarial AI attack
La veille de la cybersécurité
AI is a rapidly growing technology that has many benefits for society. However, as with all new technologies, misuse is a potential risk. One of the most troubling potential misuses of AI is the adversarial AI attack, in which AI is used to maliciously manipulate or deceive another AI system. Most AI programs learn, adapt and evolve through behavioral learning.
Adversarial AI and the dystopian future of tech
AI is a rapidly growing technology that has many benefits for society. However, as with all new technologies, misuse is a potential risk. One of the most troubling potential misuses of AI can be found in the form of adversarial AI attacks. In an adversarial AI attack, AI is used to maliciously manipulate or deceive another AI system.
Deep Instinct BrandVoice: What Happens When AI Falls Into The Wrong Hands?
Artificial intelligence (AI) is one of the most discussed technology fields today, and for good reason. AI will soon impact nearly every aspect of our lives, and we have only just begun scratching the surface of AI's true potential. With AI, we are deepening our knowledge of human genetics and delivering leaps in medicine, deploying self-driving vehicles and robots for an array of industries, and combating fraud and cybercrime, to name just a few of the growing list of applications. However, as with any nascent technology, AI has the potential to cause harm when placed in the wrong hands. We have begun seeing AI used for nefarious purposes, chiefly in the form of AI-facilitated cyberattacks, and we forecast adversarial AI to be the next challenge in this area.
Thwarting adversarial AI with context awareness -- GCN
Researchers at the University of California at Riverside are working to teach computer vision systems what objects typically exist in close proximity to one another, so that if one is altered, the system can flag it, potentially thwarting malicious interference with artificial intelligence systems. The yearlong project, supported by a nearly $1 million grant from the Defense Advanced Research Projects Agency, aims to understand how hackers target machine-vision systems with adversarial AI attacks. Led by Amit Roy-Chowdhury, an electrical and computer engineering professor at the school's Marlan and Rosemary Bourns College of Engineering, the project is part of the Machine Vision Disruption program within DARPA's AI Explorations program. Adversarial AI attacks, which attempt to fool machine learning models by supplying deceptive input, are gaining attention. "Adversarial attacks can destabilize AI technologies, rendering them less safe, predictable, or reliable," Carnegie Mellon University Professor David Danks wrote in IEEE Spectrum in February.
- North America > United States > California (0.25)
- North America > United States > Virginia (0.05)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
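To make "supplying deceptive input" concrete, here is a minimal sketch of a fast-gradient-sign-style perturbation against a toy linear classifier. This is purely illustrative and is not taken from any of the articles or projects above: the weights, input, and `fgsm_perturb` helper are invented for the example, and real attacks target deep models rather than a hand-written linear score.

```python
import numpy as np

def fgsm_perturb(x, w, epsilon):
    """Shift x by epsilon in the direction that lowers the class score.

    For a linear score w . x, the gradient with respect to x is just w,
    so subtracting epsilon * sign(w) reduces the score as much as possible
    under a per-component budget of epsilon (the fast gradient sign idea).
    """
    return x - epsilon * np.sign(w)

w = np.array([0.5, -1.0, 2.0])   # toy model weights (illustrative)
x = np.array([1.0, 1.0, 1.0])    # clean input, classified positive

clean_score = w @ x              # 1.5 -> model says "positive"
adv = fgsm_perturb(x, w, epsilon=0.6)
adv_score = w @ adv              # score drops by epsilon * sum(|w|) = 2.1

print(clean_score, adv_score)    # the small perturbation flips the sign
```

The point of the sketch is that each input component moves by at most 0.6, yet the classifier's decision flips, which is exactly the kind of small, deceptive change that context-aware defenses like the UC Riverside project try to catch.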
Are You Ready For The Age Of Adversarial AI? Attackers Can Leverage Artificial Intelligence Too
Artificial intelligence (AI) has become the foundation of everyday technologies, including smartphones, cars, banking apps, home devices and more. In the cybersecurity world, AI is powering new technologies to enhance the detection of malicious behavior and sophisticated threats. Complex models can identify attack trends much faster than previous systems. But what if attackers could exploit the very power of AI to launch new attacks? Is it possible to subvert the AI we depend on, including cybersecurity products, to evade detection?
- North America > United States > California (0.05)
- Europe > Russia (0.05)
- Asia > Russia (0.05)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.57)